44 research outputs found

    Surface-bounded growth modeling applied to human mandibles

    From a set of longitudinal three-dimensional scans of the same anatomical structure, we accurately model the temporal shape and size changes using a linear shape model. On a total of 31 computed tomography scans of the mandible from six patients, 14,851 semilandmarks are found automatically using shape features and a new algorithm called geometry-constrained diffusion. The semilandmarks are mapped into Procrustes space. Principal component analysis extracts a one-dimensional subspace, which is used to construct a linear growth model. The worst-case mean modeling error in a cross-validation study is 3.7 mm.
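    The pipeline described above (shape normalisation, PCA down to a one-dimensional subspace, linear growth model) can be sketched as follows. This is only an illustration: synthetic landmarks stand in for the CT-derived semilandmarks, and a crude centre-and-scale normalisation stands in for a full Procrustes fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the semilandmark data: 31 scans, each a
# (n_points x 3) landmark configuration that drifts linearly with
# age along a fixed growth direction, plus noise.
n_scans, n_points = 31, 50
base = rng.normal(size=(n_points, 3))
growth_dir = rng.normal(size=(n_points, 3))
ages = np.linspace(0, 1, n_scans)
shapes = np.array([base + a * growth_dir + 0.01 * rng.normal(size=(n_points, 3))
                   for a in ages])

# Crude Procrustes-style normalisation: centre and scale each shape.
def normalise(s):
    s = s - s.mean(axis=0)
    return s / np.linalg.norm(s)

X = np.array([normalise(s).ravel() for s in shapes])

# PCA: the first principal component spans the growth subspace.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]              # one PC1 score per scan

# Linear growth model: regress the PC1 score on age.
slope, intercept = np.polyfit(ages, scores, 1)
pred = intercept + slope * ages
print("R^2 of linear growth model:", 1 - np.var(scores - pred) / np.var(scores))
```

    On data whose dominant variation really is growth, as simulated here, the one-dimensional PCA subspace captures it almost entirely and the score is close to linear in age.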

    Multispectral Imaging for Determination of Astaxanthin Concentration in Salmonids

    Multispectral imaging has been evaluated for characterizing the concentration of a specific carotenoid pigment, astaxanthin. Fifty-nine rainbow trout, Oncorhynchus mykiss, were filleted and the fillets imaged using a rapid multispectral imaging device for quantitative analysis. The device captures reflection properties in 19 distinct wavelength bands prior to determination of the true astaxanthin concentration. The samples ranged from 0.20 to 4.34 μg astaxanthin per g fish. A PLSR model was calibrated to predict astaxanthin concentration from novel images and showed good results, with an RMSEP of 0.27. For comparison, a similar model was built for normal color images, which yielded an RMSEP of 0.45. The acquisition speed of the multispectral imaging system and the accuracy of the obtained PLSR model suggest this method as a promising technique for rapid in-line estimation of astaxanthin concentration in rainbow trout fillets.

    Registration and shape modelling of porcine bone structures via CT

    A shape model of bone structures connected to the pelvic bone, fig. 1(a), is built. The shape model is used by the Danish Meat Research Institute (DMRI) to optimize and validate the functionality of a specific tool in a slaughterhouse robot currently being developed. The data consists of 2D CT scans, fig. 1(b), of 40 porcine carcasses separated along the medial plane. Each scan has a slice thickness of 10 mm with a spacing of 10 mm between scans. Voxel dimensions are [x, y, z] = [0.88, 0.88, 10] mm. The full length of each carcass is scanned, resulting in approximately 130 scans per carcass, but only 30 scans per carcass are used in this application, covering the region around the pelvic bone. Extracting corresponding points on a 3D shape from 2D scans is a tedious and difficult task, calling for (semi-)automated methods. Standard thresholding techniques combined with morphology ensure a robust segmentation of bone contours in the 2D scans. Points on the contours, fig. 1(c), are used as 3D constraints in the reconstruction of the bone surfaces using variational interpolation and radial basis functions (RBF) [3]. Due to the massive amount of data, each contour is approximated using Fourier basis functions and then resampled. Only points along the contours with high curvature are selected as constraints on the 3D surface. The implicit surface, fig. 1(d), is the
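    As a rough 2D illustration of the variational-interpolation step (not the authors' implementation): constrain an RBF interpolant to be zero on the contour points and negative at points offset toward the interior, then take the interpolant's zero level set as the reconstructed boundary. SciPy's thin-plate-spline RBFInterpolator stands in for the RBF solver of [3], and a circle stands in for a bone contour.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Contour points on a unit circle (stand-in for a segmented bone contour).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
on_curve = np.c_[np.cos(t), np.sin(t)]

# Variational interpolation trick: f = 0 on the contour, f = -1 at points
# offset inward along the normals; the zero level set of the interpolant
# then reconstructs the contour as an implicit curve.
inside = 0.5 * on_curve
pts = np.vstack([on_curve, inside])
vals = np.r_[np.zeros(len(on_curve)), -np.ones(len(inside))]

f = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

print(f(np.array([[1.0, 0.0]])))   # ~0: on the contour
print(f(np.array([[0.0, 0.0]])))   # negative: inside
print(f(np.array([[1.5, 0.0]])))   # positive: outside, by this sign convention
```

    In 3D the same construction applies with contour points from all slices as constraints, which is why curvature-based subsampling of the contours matters: every constraint point adds one RBF centre to the linear system.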

    Diagnostic decision rule for support in clinical assessment of the need for surgical intervention in horses with acute abdominal pain

    A prospective survey of horses with colic referred to a university hospital was undertaken to develop a simple clinical decision support system capable of predicting whether or not horses require surgical intervention. Cases were classified as requiring surgical intervention or not on the basis of intraoperative findings or necropsy reports. Logistic regression analysis was applied to identify the predictors most strongly associated with the treatment needed. The classification and regression tree (CART) methodology was used to combine the variables into a simple classification system. The performance of the resulting algorithms, as diagnostic instruments, was recorded as test sensitivity and specificity. The CART method generated 5 different classification trees with a similar basic structure consisting of degree of pain, peritoneal fluid colour, and rectal temperature. The tree constructed at a prevalence of 15% surgical cases appeared to be the best proposal made by CART. In this classification tree, further discrimination of cases was obtained by including the findings of rectal examination and packed cell volume. When regarded as a test system, the sensitivity and specificity were 52% and 95%, respectively, corresponding to positive and negative predictive values of 68% and 91%. The variables examined in the present study did not provide a safe clinical decision rule. The classification tree constructed at 15% surgical cases was considered feasible: the proportion of horses incorrectly predicted to be without need of immediate surgery (false negatives) was small, whereas the proportion of horses incorrectly predicted to be in need of immediate surgery (false positives) was large. Some of the false positive horses were amenable to surgical treatment, although these cases did not conform to the strict definition of a surgical case. A less rigorous definition of a surgical case than that used in the present study would lower the percentage of false positives.
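    The reported predictive values follow directly from sensitivity, specificity, and the 15% prevalence via Bayes' rule. A quick check with the rounded figures quoted in the abstract (the small discrepancy from the published 68% comes from rounding of the inputs):

```python
# Positive and negative predictive values from sensitivity, specificity,
# and prevalence (Bayes' rule), using the rounded figures in the abstract.
sens, spec, prev = 0.52, 0.95, 0.15

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV ≈ {ppv:.0%}, NPV ≈ {npv:.0%}")  # ≈ 65% and 92% with these rounded inputs
```

    This also shows why the 15%-prevalence tree behaves as described: at low prevalence even a fairly specific test yields many false positives relative to true positives, so the NPV is high while the PPV stays modest.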

    Wedgelet Enhanced Appearance Models

    Statistical region-based segmentation methods such as the Active Appearance Model (AAM) are used for establishing dense correspondences in images by learning the variation in shape and pixel intensities in a training set. For low-resolution 2D images, correspondences can be recovered reliably in real time. However, as resolution increases this becomes infeasible due to excessive storage and computational requirements. In this paper we propose to reduce the textural components by modelling the coefficients of a wedgelet-based regression tree instead of the original pixel intensities. The wedgelet regression trees employed are based on triangular domains and estimated using cross-validation. They are functional descriptions of the intensity information and serve to 1) reduce noise and 2) produce a compact textural description. The wedgelet-enhanced appearance model is applied to a case study of human faces. A compression ratio of 1:40 for the texture information is obtained without notably sacrificing segmentation accuracy; even at a compression ratio of 1:150, fair segmentation is achieved.
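    As a toy illustration of the wedgelet idea (a single wedge on a square block, not the authors' triangular-domain regression trees): approximate an image block by two constant regions separated by a straight line, chosen by exhaustive search over a small dictionary of lines. Storing only the line parameters and two means, instead of every pixel, is what yields the compression.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy image block: a noisy step edge, the kind of structure a wedgelet
# (two constant regions split by a straight line) captures well.
n = 16
yy, xx = np.mgrid[0:n, 0:n]
block = (xx + 0.5 * yy > 12).astype(float) + 0.05 * rng.normal(size=(n, n))

def wedgelet_sse(block, theta, offset):
    """SSE of the best two-constant fit on the half-planes defined by
    the line cos(theta)*x + sin(theta)*y = offset."""
    n = block.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    mask = np.cos(theta) * xx + np.sin(theta) * yy > offset
    sse = 0.0
    for m in (mask, ~mask):
        if m.any():
            sse += ((block[m] - block[m].mean()) ** 2).sum()
    return sse

# Exhaustive search over a small dictionary of lines (angle, offset).
best = min(((wedgelet_sse(block, th, off), th, off)
            for th in np.linspace(0, np.pi, 32)
            for off in np.linspace(-n, 2 * n, 32)), key=lambda t: t[0])

flat_sse = ((block - block.mean()) ** 2).sum()
print(f"flat fit SSE {flat_sse:.1f}  vs  best wedgelet SSE {best[0]:.1f}")
```

    Near edges the wedgelet fit reduces the error by an order of magnitude over a single constant, which is why a shallow tree of such fits can stand in for the raw texture at high compression ratios.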

    A Generalized Linear Joint Trained Framework for Semi-Supervised Learning of Sparse Features

    The elastic net is among the most widely used types of regularization algorithms, commonly associated with the problem of supervised generalized linear model estimation via penalized maximum likelihood. Its attractive properties, originating from a combination of ℓ1 and ℓ2 norms, endow this method with the ability to select variables while taking into account the correlations between them. In the last few years, semi-supervised approaches that use both labeled and unlabeled data have become an important component in statistical research. Despite this interest, few researchers have investigated semi-supervised elastic net extensions. This paper introduces a novel solution for semi-supervised learning of sparse features in the context of generalized linear model estimation: the generalized semi-supervised elastic net (s2net), which extends the supervised elastic net method with a general mathematical formulation that covers, but is not limited to, both regression and classification problems. In addition, a flexible and fast implementation for s2net is provided. Its advantages are illustrated in different experiments using real and synthetic data sets, which show how s2net improves on other techniques that have been proposed for both supervised and semi-supervised learning.
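    s2net ships with its own implementation; as a baseline illustration of the supervised elastic net it extends, here is a sketch using scikit-learn (the data and penalty parameters are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)

# Sparse ground truth: only the first 5 of 50 features matter.
n, p = 100, 50
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 1, -1]
X = rng.normal(size=(n, p))
y = X @ beta + 0.1 * rng.normal(size=n)

# Supervised elastic net: l1_ratio mixes the l1 (sparsity-inducing)
# and l2 (correlated-variable grouping) penalties.
model = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("non-zero coefficients:", selected)
```

    The semi-supervised extension in the paper keeps this penalty structure but additionally exploits unlabeled rows of X when estimating the coefficients.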